Error Analysis of Modified Langevin Dynamics
Abstract
We consider Langevin dynamics associated with a modified kinetic energy vanishing for small momenta. This allows us to freeze slow particles, and hence avoid the re-computation of inter-particle forces, which leads to computational gains. On the other hand, the statistical error may increase since there are a priori more correlations in time. The aim of this work is first to prove the ergodicit...
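The core idea described above (a kinetic energy whose gradient vanishes below a momentum cutoff, so slow particles stop moving and their inter-particle forces need not be recomputed) can be sketched as follows. The spliced shifted-quadratic form of the kinetic energy, the cutoff `a`, and all parameter names are illustrative assumptions, not the specific choices made in the paper.

```python
import numpy as np

def kinetic_energy_grad(p, a=1.0):
    """Velocity dq/dt = U'(p) for a modified kinetic energy U whose
    gradient is exactly zero for |p| <= a: particles with small momentum
    are frozen. The shifted-quadratic form is an illustrative choice."""
    return np.where(np.abs(p) <= a, 0.0, p - np.sign(p) * a)

def modified_langevin_step(q, p, force, dt, gamma=1.0, beta=1.0, rng=None):
    """One Euler-Maruyama step of Langevin dynamics with the modified
    kinetic energy; frozen particles keep their positions, which is
    where the computational gain comes from."""
    if rng is None:
        rng = np.random.default_rng()
    v = kinetic_energy_grad(p)           # zero velocity for slow particles
    q_new = q + dt * v                   # frozen particles do not move
    noise = rng.normal(size=p.shape) * np.sqrt(2.0 * gamma * dt / beta)
    p_new = p + dt * (force(q) - gamma * v) + noise
    return q_new, p_new
```

With a harmonic force such as `force = lambda q: -q`, any particle whose momentum stays below the cutoff keeps its position from one step to the next, so its force contribution can be cached rather than recomputed.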
Similar Resources

The notion of error in Langevin dynamics. I. Linear analysis
The notion of error in practical molecular and Langevin dynamics simulations of large biomolecules is far from understood because of the relatively large value of the timestep used, the short simulation length, and the low-order methods employed. We begin to examine this issue with respect to equilibrium and dynamic time-correlation functions by analyzing the behavior of selected implicit and e...
Dynamics of Langevin Simulations
This chapter reviews numerical simulations of quantum field theories based on stochastic quantization and the Langevin equation. The topics discussed include renormalization of finite stepsize algorithms, Fourier acceleration, and the relation of the Langevin equation to hybrid stochastic algorithms and hybrid Monte Carlo. Invited chapter to appear in the special supplement “Stochastic Quantiza...
Mirrored Langevin Dynamics
We generalize the Langevin Dynamics through the mirror descent framework for first-order sampling. The naïve approach of incorporating Brownian motion into the mirror descent dynamics, which we refer to as Symmetric Mirrored Langevin Dynamics (S-MLD), is shown to be connected to the theory of Weighted Hessian Manifolds. The S-MLD, unfortunately, contains the hard instance of Cox–Ingersoll–Ross pro...
A Hitting Time Analysis of Stochastic Gradient Langevin Dynamics
We study the Stochastic Gradient Langevin Dynamics (SGLD) algorithm for non-convex optimization. The algorithm performs stochastic gradient descent, where in each step it injects appropriately scaled Gaussian noise to the update. We analyze the algorithm’s hitting time to an arbitrary subset of the parameter space. Two results follow from our general theory: First, we prove that for empirical r...
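A minimal sketch of the SGLD update described above: a stochastic gradient descent step with injected Gaussian noise whose standard deviation scales as the square root of twice the step size. The toy quadratic objective and all names below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def sgld_step(theta, stochastic_grad, step_size, rng):
    """One SGLD update: a gradient step plus Gaussian noise with
    variance 2 * step_size (inverse temperature set to 1)."""
    noise = rng.normal(size=theta.shape) * np.sqrt(2.0 * step_size)
    return theta - step_size * stochastic_grad(theta) + noise

# Toy example (illustrative): f(x) = 0.5 * x^2, whose Gibbs measure
# exp(-f) is a standard Gaussian, so the iterates wander near 0.
rng = np.random.default_rng(0)
theta = np.array([5.0])
for _ in range(2000):
    theta = sgld_step(theta, lambda t: t, 0.01, rng)
```

The hitting-time question studied in the abstract then asks how many such steps the iterate needs before it first enters a given target subset of parameter space.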
Journal

Journal Title: Journal of Statistical Physics
Year: 2016
ISSN: 0022-4715,1572-9613
DOI: 10.1007/s10955-016-1544-6